Search Results

Documents authored by Haeupler, Bernhard


Document
Almost Universally Optimal Distributed Laplacian Solvers via Low-Congestion Shortcuts

Authors: Ioannis Anagnostides, Christoph Lenzen, Bernhard Haeupler, Goran Zuzic, and Themis Gouleakis

Published in: LIPIcs, Volume 246, 36th International Symposium on Distributed Computing (DISC 2022)


Abstract
In this paper, we refine the (almost) existentially optimal distributed Laplacian solver recently developed by Forster, Goranci, Liu, Peng, Sun, and Ye (FOCS '21) into an (almost) universally optimal distributed Laplacian solver. Specifically, when the topology is known (i.e., the Supported-CONGEST model), we show that any Laplacian system on an n-node graph with shortcut quality SQ(G) can be solved after n^{o(1)} SQ(G) log(1/ε) rounds, where ε is the required accuracy. This almost matches our lower bound that guarantees that any correct algorithm on G requires Ω̃(SQ(G)) rounds, even for a crude solution with ε ≤ 1/2. Several important implications hold in the unknown-topology (i.e., standard CONGEST) case: for excluded-minor graphs we get an almost universally optimal algorithm that terminates in D ⋅ n^{o(1)} log(1/ε) rounds, where D is the hop-diameter of the network; as well as n^{o(1)} log(1/ε)-round algorithms for the case of SQ(G) ≤ n^{o(1)}, which holds for most networks of interest. Conditioned on improvements in state-of-the-art constructions of low-congestion shortcuts, the CONGEST results will match the Supported-CONGEST ones. Moreover, following a recent line of work in distributed algorithms, we consider a hybrid communication model which enhances CONGEST with limited global power in the form of the node-capacitated clique (NCC) model. In this model, we show the existence of a Laplacian solver with round complexity n^{o(1)} log(1/ε). The unifying thread of these results, and our main technical contribution, is the study of a novel ρ-congested generalization of the standard part-wise aggregation problem. We develop near-optimal algorithms for this primitive in the Supported-CONGEST model, almost-optimal algorithms in (standard) CONGEST (with the additional overhead due to standard barriers), as well as a simple algorithm for bounded-treewidth graphs with a quadratic dependence on the congestion ρ. This primitive can be readily used to accelerate the Laplacian solver of Forster, Goranci, Liu, Peng, Sun, and Ye, and we believe it will find further independent applications in the future.
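
As a concrete, purely centralized reference for what "solving a Laplacian system to accuracy ε" means, the Python sketch below (illustrative graph, demand vector, and tolerance; not the distributed solver described above) solves Lx = b on a small path graph with conjugate gradient.

import numpy as np

def graph_laplacian(edges, n):
    """Build the Laplacian matrix of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return L

def solve_laplacian(L, b, eps):
    """Conjugate gradient for L x = b; assumes the graph is connected and b sums to zero."""
    x = np.zeros_like(b)
    r = b - L @ x
    p = r.copy()
    rs = r @ r
    while np.sqrt(rs) > eps * np.linalg.norm(b):
        Lp = L @ p
        alpha = rs / (p @ Lp)
        x += alpha * p
        r -= alpha * Lp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x - x.mean()          # fix the additive constant in the (singular) system

n = 5
edges = [(i, i + 1) for i in range(n - 1)]      # illustrative path graph
L = graph_laplacian(edges, n)
b = np.zeros(n)
b[0], b[-1] = 1.0, -1.0                         # demand vector summing to zero
x = solve_laplacian(L, b, eps=1e-8)
print(np.linalg.norm(L @ x - b))                # residual is ~0 up to the chosen accuracy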

Cite as

Ioannis Anagnostides, Christoph Lenzen, Bernhard Haeupler, Goran Zuzic, and Themis Gouleakis. Almost Universally Optimal Distributed Laplacian Solvers via Low-Congestion Shortcuts. In 36th International Symposium on Distributed Computing (DISC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 246, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{anagnostides_et_al:LIPIcs.DISC.2022.6,
  author =	{Anagnostides, Ioannis and Lenzen, Christoph and Haeupler, Bernhard and Zuzic, Goran and Gouleakis, Themis},
  title =	{{Almost Universally Optimal Distributed Laplacian Solvers via Low-Congestion Shortcuts}},
  booktitle =	{36th International Symposium on Distributed Computing (DISC 2022)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-255-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{246},
  editor =	{Scheideler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2022.6},
  URN =		{urn:nbn:de:0030-drops-171978},
  doi =		{10.4230/LIPIcs.DISC.2022.6},
  annote =	{Keywords: Distributed algorithms, Laplacian solvers, low-congestion shortcuts}
}
Document
Invited Talk
The Quest for Universally-Optimal Distributed Algorithms (Invited Talk)

Authors: Bernhard Haeupler

Published in: LIPIcs, Volume 209, 35th International Symposium on Distributed Computing (DISC 2021)


Abstract
Many distributed optimization algorithms achieve an existentially-optimal round complexity of Õ(√n + D), i.e., there exists some pathological worst-case topology on which no algorithm can be faster. However, most networks of interest allow for exponentially faster algorithms. This motivates two questions:
- What network topology parameters determine the complexity of distributed optimization?
- Are there universally-optimal algorithms that are as fast as possible on every single topology?
This talk provides an overview of the freshly-completed 6-year program that resolves these 25-year-old open problems for a wide class of global network optimization problems including MST, (1+ε)-min cut, various approximate shortest path problems, sub-graph connectivity, etc. We provide several equivalent graph parameters that are tight universal lower bounds for the above problems, fully characterizing their inherent complexity. We also give the first universally-optimal algorithms approximately achieving this complexity on every topology. The quest for universally-optimal distributed algorithms required novel techniques that also answer fundamental (open) questions in seemingly unrelated fields, such as network information theory, approximation algorithms, (oblivious) packet routing, (algorithmic & topological) graph theory, and metric embeddings. Generally, the problems addressed in these fields explicitly or implicitly ask to jointly optimize 𝓁_∞ & 𝓁₁ parameters such as congestion & dilation, communication rate & delay, capacities & diameters of subnetworks, or the makespan of packet routings. In particular, results obtained on the way include the following firsts: (Congestion+Dilation)-Competitive Oblivious Routing, Network Coding Gaps for Completion-Times, Hop-Constrained Expanders & Expander Decompositions, Bi-Criteria (Online / Demand-Robust) Approximation Algorithms for many Diameter-Constrained Network Design Problems (e.g., (Group) Steiner Tree/Forest), Makespan-Competitive (Compact and Distributed) Routing Tables, and (Probabilistic) Tree Embeddings for Hop-Constrained Distances. (Joint work with M. Ghaffari, G. Zuzic, D.E. Hershkowitz, D. Wajc, J. Li, H. Raecke, T. Izumi)

Cite as

Bernhard Haeupler. The Quest for Universally-Optimal Distributed Algorithms (Invited Talk). In 35th International Symposium on Distributed Computing (DISC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 209, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{haeupler:LIPIcs.DISC.2021.1,
  author =	{Haeupler, Bernhard},
  title =	{{The Quest for Universally-Optimal Distributed Algorithms}},
  booktitle =	{35th International Symposium on Distributed Computing (DISC 2021)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-210-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{209},
  editor =	{Gilbert, Seth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2021.1},
  URN =		{urn:nbn:de:0030-drops-148030},
  doi =		{10.4230/LIPIcs.DISC.2021.1},
  annote =	{Keywords: Distributed algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Near-Optimal Schedules for Simultaneous Multicasts

Authors: Bernhard Haeupler, D. Ellis Hershkowitz, and David Wajc

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We study the store-and-forward packet routing problem for simultaneous multicasts, in which multiple packets have to be forwarded along given trees as fast as possible. This is a natural generalization of the seminal work of Leighton, Maggs and Rao, which solved this problem for unicasts, i.e. the case where all trees are paths. They showed the existence of asymptotically optimal O(C + D)-length schedules, where the congestion C is the maximum number of packets sent over an edge and the dilation D is the maximum depth of a tree. This improves over the trivial O(CD) length schedules. We prove a lower bound for multicasts, which shows that there do not always exist schedules of non-trivial length, o(CD). On the positive side, we construct O(C+D+log² n)-length schedules in any n-node network. These schedules are near-optimal, since our lower bound shows that this length cannot be improved to O(C+D) + o(log n).
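
To make the two parameters concrete, the Python sketch below computes the congestion C and dilation D of a pair of made-up multicast trees; it only evaluates the parameters and is not the scheduling algorithm from the paper.

from collections import Counter

# Illustrative multicast trees, each given as a child -> parent map (root omitted).
trees = [
    {1: 0, 2: 0, 3: 1},      # tree rooted at node 0
    {2: 0, 6: 2, 4: 6},      # another tree rooted at node 0, sharing edge (0, 2)
]

def depth(tree, node):
    """Hop distance from node up to the root of its tree."""
    d = 0
    while node in tree:
        node = tree[node]
        d += 1
    return d

load = Counter(tuple(sorted((child, parent))) for t in trees for child, parent in t.items())
C = max(load.values())                              # congestion: most trees sharing one edge
D = max(depth(t, v) for t in trees for v in t)      # dilation: maximum tree depth
print(C, D)   # the trivial schedule needs O(C * D) rounds; the paper gives O(C + D + log^2 n)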

Cite as

Bernhard Haeupler, D. Ellis Hershkowitz, and David Wajc. Near-Optimal Schedules for Simultaneous Multicasts. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 78:1-78:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2021.78,
  author =	{Haeupler, Bernhard and Hershkowitz, D. Ellis and Wajc, David},
  title =	{{Near-Optimal Schedules for Simultaneous Multicasts}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{78:1--78:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.78},
  URN =		{urn:nbn:de:0030-drops-141471},
  doi =		{10.4230/LIPIcs.ICALP.2021.78},
  annote =	{Keywords: Packet routing, multicast, scheduling algorithms}
}
Document
Computation-Aware Data Aggregation

Authors: Bernhard Haeupler, D. Ellis Hershkowitz, Anson Kahng, and Ariel D. Procaccia

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Data aggregation is a fundamental primitive in distributed computing wherein a network computes a function of every node's input. However, while compute time is non-negligible in modern systems, standard models of distributed computing do not take compute time into account. Rather, most distributed models of computation only explicitly consider communication time. In this paper, we introduce a model of distributed computation that considers both computation and communication so as to give a theoretical treatment of data aggregation. We study both the structure of the fastest data aggregation schedules in this model and how to compute them. As our first result, we give a polynomial-time algorithm that computes the optimal schedule when the input network is a complete graph. Moreover, since one may want to aggregate data over a pre-existing network, we also study data aggregation scheduling on arbitrary graphs. We demonstrate that this problem on arbitrary graphs is hard to approximate within a multiplicative factor of 1.5. Finally, we give an O(log n ⋅ log(OPT/t_m))-approximation algorithm for this problem on arbitrary graphs, where n is the number of nodes and OPT is the length of the optimal schedule.
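
The toy calculation below illustrates the joint accounting of communication and computation for a simple pairwise-merging aggregation schedule on a complete graph; the costs t_comm and t_comp are made up and this is not the paper's optimal-schedule algorithm.

import math

def pairwise_aggregation_time(n, t_comm, t_comp):
    """Duration of a balanced pairwise-merging schedule on a complete graph:
    in each of ceil(log2 n) levels, half of the active nodes send their partial
    aggregate to a partner, which then spends t_comp combining the two values."""
    levels = math.ceil(math.log2(n))
    return levels * (t_comm + t_comp)

print(pairwise_aggregation_time(n=16, t_comm=1.0, t_comp=0.5))   # 4 levels -> 6.0 time units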

Cite as

Bernhard Haeupler, D. Ellis Hershkowitz, Anson Kahng, and Ariel D. Procaccia. Computation-Aware Data Aggregation. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 65:1-65:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.ITCS.2020.65,
  author =	{Haeupler, Bernhard and Hershkowitz, D. Ellis and Kahng, Anson and Procaccia, Ariel D.},
  title =	{{Computation-Aware Data Aggregation}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{65:1--65:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.65},
  URN =		{urn:nbn:de:0030-drops-117506},
  doi =		{10.4230/LIPIcs.ITCS.2020.65},
  annote =	{Keywords: Data aggregation, distributed algorithm scheduling, approximation algorithms}
}
Document
Erasure Correction for Noisy Radio Networks

Authors: Keren Censor-Hillel, Bernhard Haeupler, D. Ellis Hershkowitz, and Goran Zuzic

Published in: LIPIcs, Volume 146, 33rd International Symposium on Distributed Computing (DISC 2019)


Abstract
The radio network model is a well-studied model of wireless, multi-hop networks. However, radio networks make the strong assumption that messages are delivered deterministically. The recently introduced noisy radio network model relaxes this assumption by dropping messages independently at random. In this work we quantify the relative computational power of noisy radio networks and classic radio networks. In particular, given a non-adaptive protocol for a fixed radio network, we show how to reliably simulate this protocol if noise is introduced, with a multiplicative cost of poly(log Delta, log log n) rounds, where n is the number of nodes in the network and Delta is the max degree. Moreover, we demonstrate that, even if the simulated protocol is not non-adaptive, it can be simulated with a multiplicative O(Delta log^2 Delta) cost in the number of rounds. Lastly, we argue that simulations with a multiplicative overhead of o(log Delta) are unlikely to exist by proving that an Omega(log Delta) multiplicative round overhead is necessary under certain natural assumptions.
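
The Monte Carlo sketch below simulates only the noise model (each transmission dropped independently with probability p) and estimates how many repetitions one message needs to get through; it is a toy illustration, not one of the simulation schemes from the paper.

import random

def attempts_until_received(p_drop, rng):
    """Attempts needed until one copy of a message gets through a link that
    independently drops each transmission with probability p_drop."""
    attempts = 1
    while rng.random() < p_drop:
        attempts += 1
    return attempts

rng = random.Random(0)
p = 0.5
samples = [attempts_until_received(p, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 1 / (1 - p) = 2 expected attempts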

Cite as

Keren Censor-Hillel, Bernhard Haeupler, D. Ellis Hershkowitz, and Goran Zuzic. Erasure Correction for Noisy Radio Networks. In 33rd International Symposium on Distributed Computing (DISC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 146, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{censorhillel_et_al:LIPIcs.DISC.2019.10,
  author =	{Censor-Hillel, Keren and Haeupler, Bernhard and Hershkowitz, D. Ellis and Zuzic, Goran},
  title =	{{Erasure Correction for Noisy Radio Networks}},
  booktitle =	{33rd International Symposium on Distributed Computing (DISC 2019)},
  pages =	{10:1--10:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-126-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{146},
  editor =	{Suomela, Jukka},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2019.10},
  URN =		{urn:nbn:de:0030-drops-113170},
  doi =		{10.4230/LIPIcs.DISC.2019.10},
  annote =	{Keywords: radio networks, erasure correction, noisy radio networks, protocol simulation, distributed computing models}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Optimal Strategies for Patrolling Fences

Authors: Bernhard Haeupler, Fabian Kuhn, Anders Martinsson, Kalina Petrova, and Pascal Pfister

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
A classical multi-agent fence patrolling problem asks: What is the maximum length L of a line fence that k agents with maximum speeds v_1, ..., v_k can patrol if each point on the line needs to be visited at least once every unit of time. It is easy to see that L = alpha sum_{i=1}^k v_i for some efficiency alpha in [1/2,1). After a series of works [Czyzowicz et al., 2011; Dumitrescu et al., 2014; Kawamura and Kobayashi, 2015; Kawamura and Soejima, 2015] giving better and better efficiencies, it was conjectured by Kawamura and Soejima [Kawamura and Soejima, 2015] that the best possible efficiency approaches 2/3. No upper bounds on the efficiency below 1 were known. We prove the first such upper bounds and tightly bound the optimal efficiency in terms of the minimum speed ratio s = v_{max}/v_{min} and the number of agents k. Our bounds of alpha <= 1/(1 + 1/s) and alpha <= 1 - 1/(sqrt{k}+1) imply that in order to achieve efficiency 1 - epsilon, at least k >= Omega(epsilon^{-2}) agents with a speed ratio of s >= Omega(epsilon^{-1}) are necessary. Guided by our upper bounds, we construct a scheme whose efficiency approaches 1, disproving the conjecture stated above. Our scheme asymptotically matches our upper bounds in terms of the maximal speed difference and the number of agents used. A variation of the fence patrolling problem considers a circular fence instead and asks for its circumference to be maximized. We consider the unidirectional case of this variation, where all agents are only allowed to move in one direction, say clockwise. At first, a strategy yielding L = max_{r in [k]} r * v_r, where v_1 >= v_2 >= ... >= v_k, was conjectured to be optimal by Czyzowicz et al. [Czyzowicz et al., 2011]. This was proven not to be the case by constructions that apply only to specific numbers of agents and improve L only marginally. We give a general construction that yields L = 1/(33 log_e log_2(k)) sum_{i=1}^k v_i for any set of agents, which in particular for the speeds 1, 1/2, ..., 1/k diverges as k -> infty, thus resolving a conjecture by Kawamura and Soejima [Kawamura and Soejima, 2015] affirmatively.
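
The quantities above can be made concrete in a few lines of Python: the classical partition strategy, in which agent i alone patrols a segment of length v_i/2, realizes alpha = 1/2, while the two new upper bounds cap what any strategy can achieve; the speeds are illustrative.

import math

speeds = [3.0, 2.0, 1.0, 1.0]                # illustrative agent speeds v_1 >= ... >= v_k
k = len(speeds)
s = max(speeds) / min(speeds)                # speed ratio v_max / v_min

# Partition strategy: agent i alone patrols a segment of length v_i / 2, so alpha = 1/2.
partition_length = 0.5 * sum(speeds)

# Upper bounds from the abstract on the efficiency alpha of any strategy.
upper_from_speed_ratio = 1.0 / (1.0 + 1.0 / s)          # alpha <= 1 / (1 + 1/s)
upper_from_num_agents = 1.0 - 1.0 / (math.sqrt(k) + 1)  # alpha <= 1 - 1 / (sqrt(k) + 1)
best_upper = min(upper_from_speed_ratio, upper_from_num_agents)

print(partition_length, best_upper * sum(speeds))   # achievable length vs. upper bound on L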

Cite as

Bernhard Haeupler, Fabian Kuhn, Anders Martinsson, Kalina Petrova, and Pascal Pfister. Optimal Strategies for Patrolling Fences. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 144:1-144:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2019.144,
  author =	{Haeupler, Bernhard and Kuhn, Fabian and Martinsson, Anders and Petrova, Kalina and Pfister, Pascal},
  title =	{{Optimal Strategies for Patrolling Fences}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{144:1--144:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.144},
  URN =		{urn:nbn:de:0030-drops-107202},
  doi =		{10.4230/LIPIcs.ICALP.2019.144},
  annote =	{Keywords: multi-agent systems, patrolling algorithms}
}
Document
Allocate-On-Use Space Complexity of Shared-Memory Algorithms

Authors: James Aspnes, Bernhard Haeupler, Alexander Tong, and Philipp Woelfel

Published in: LIPIcs, Volume 121, 32nd International Symposium on Distributed Computing (DISC 2018)


Abstract
Many fundamental problems in shared-memory distributed computing, including mutual exclusion [James E. Burns and Nancy A. Lynch, 1993], consensus [Leqi Zhu, 2016], and implementations of many sequential objects [Prasad Jayanti et al., 2000], are known to require linear space in the worst case. However, these lower bounds all work by constructing, for any given algorithm, particular executions that may be both very long and very improbable. The significance of these bounds is justified by an assumption that any space that is used in some execution must be allocated for all executions. This assumption is not consistent with the storage allocation mechanisms of actual practical systems. We consider the consequences of adopting a per-execution approach to space complexity, where an object only counts toward the space complexity of an execution if it is used in that execution. This allows us to show that many known randomized algorithms for fundamental problems in shared-memory distributed computing have expected space complexity much lower than the worst-case lower bounds, and that many algorithms that are adaptive in time complexity can also be made adaptive in space complexity. For the specific problem of mutual exclusion, we develop a new algorithm that illustrates an apparent trade-off between low expected space complexity and low expected RMR complexity. Whether this trade-off is necessary is an open problem. For some applications, it may be helpful to pay only for objects that are updated, as opposed to those that are merely read. We give a data structure that requires no space to represent objects that are not updated, at the cost of a small overhead on those that are.
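
A minimal sketch of the accounting idea only (the class and its methods are illustrative, not a construction from the paper): a register pool that charges an execution only for the registers it actually touches.

class AllocateOnUseRegisters:
    """Registers are declared up front, but an execution is only charged for
    the registers it actually reads or writes."""

    def __init__(self, declared):
        self.declared = declared     # worst-case space bound
        self.values = {}
        self.used = set()

    def write(self, i, v):
        self.used.add(i)
        self.values[i] = v

    def read(self, i):
        self.used.add(i)
        return self.values.get(i)

    def space_used(self):
        return len(self.used)        # per-execution (allocate-on-use) space

regs = AllocateOnUseRegisters(declared=1_000_000)
regs.write(7, "token")
regs.read(7)
print(regs.space_used(), regs.declared)   # 1 vs. 1000000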

Cite as

James Aspnes, Bernhard Haeupler, Alexander Tong, and Philipp Woelfel. Allocate-On-Use Space Complexity of Shared-Memory Algorithms. In 32nd International Symposium on Distributed Computing (DISC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 121, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{aspnes_et_al:LIPIcs.DISC.2018.8,
  author =	{Aspnes, James and Haeupler, Bernhard and Tong, Alexander and Woelfel, Philipp},
  title =	{{Allocate-On-Use Space Complexity of Shared-Memory Algorithms}},
  booktitle =	{32nd International Symposium on Distributed Computing (DISC 2018)},
  pages =	{8:1--8:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-092-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{121},
  editor =	{Schmid, Ulrich and Widder, Josef},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2018.8},
  URN =		{urn:nbn:de:0030-drops-97974},
  doi =		{10.4230/LIPIcs.DISC.2018.8},
  annote =	{Keywords: Space complexity, memory allocation, mutual exclusion}
}
Document
Faster Distributed Shortest Path Approximations via Shortcuts

Authors: Bernhard Haeupler and Jason Li

Published in: LIPIcs, Volume 121, 32nd International Symposium on Distributed Computing (DISC 2018)


Abstract
A long series of recent results and breakthroughs has led to faster and better distributed approximation algorithms for single source shortest paths (SSSP) and related problems in the CONGEST model. The runtime of all these algorithms, however, is Omega~(sqrt{n}), regardless of the network topology, even on nice networks with a (poly)logarithmic network diameter D. While this is known to be necessary for some pathological networks, most topologies of interest are arguably not of this type. We give the first distributed approximation algorithms for shortest paths problems that adjust to the topology they are run on, thus achieving significantly faster running times on many topologies of interest. The running time of our algorithms depends on and is close to Q, where Q is the quality of the best shortcut that exists for the given topology. While Q = Theta~(sqrt{n} + D) for pathological worst-case topologies, many topologies of interest have Q = Theta~(D), which results in near instance-optimal running times for our algorithm, given the trivial Omega(D) lower bound. The problems we consider are as follows:
- an approximate shortest path tree and SSSP distances,
- a polylogarithmic-size distance label for every node such that from the labels of any two nodes alone one can determine their distance (approximately), and
- an (approximately) optimal flow for the transshipment problem.
Our algorithms have a tunable tradeoff between running time and approximation ratio. Our fastest algorithms have an arbitrarily good polynomial approximation guarantee and an essentially optimal O~(Q) running time. On the other end of the spectrum, we achieve polylogarithmic approximations in O~(Q * n^epsilon) rounds for any epsilon > 0. It seems likely that eventually, our non-trivial approximation algorithms for the SSSP tree and transshipment problem can be bootstrapped to give fast Q * 2^{O(sqrt{log n log log n})}-round (1+epsilon)-approximation algorithms using a recent result by Becker et al.
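
For reference, the object being approximated is an ordinary single-source shortest path tree; the plain centralized Dijkstra below (on a made-up graph, and not the distributed algorithm) computes the exact distances and tree that the CONGEST algorithms approximate in roughly Q rounds.

import heapq

def sssp(graph, src):
    """Plain centralized Dijkstra returning exact distances and the shortest path tree."""
    dist = {src: 0.0}
    parent = {src: None}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Illustrative weighted graph as adjacency lists.
graph = {0: [(1, 1.0), (2, 4.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(0, 4.0), (1, 1.0)]}
print(sssp(graph, 0))   # distances {0: 0.0, 1: 1.0, 2: 2.0} and tree {0: None, 1: 0, 2: 1}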

Cite as

Bernhard Haeupler and Jason Li. Faster Distributed Shortest Path Approximations via Shortcuts. In 32nd International Symposium on Distributed Computing (DISC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 121, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.DISC.2018.33,
  author =	{Haeupler, Bernhard and Li, Jason},
  title =	{{Faster Distributed Shortest Path Approximations via Shortcuts}},
  booktitle =	{32nd International Symposium on Distributed Computing (DISC 2018)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-092-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{121},
  editor =	{Schmid, Ulrich and Widder, Josef},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2018.33},
  URN =		{urn:nbn:de:0030-drops-98229},
  doi =		{10.4230/LIPIcs.DISC.2018.33},
  annote =	{Keywords: Distributed Graph Algorithms, Shortest Path, Shortcuts}
}
Document
Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions

Authors: Bernhard Haeupler, Amirbehshad Shahrasbi, and Ellen Vitercik

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We present many new results related to reliable (interactive) communication over insertion-deletion channels. Synchronization errors, such as insertions and deletions, strictly generalize the usual symbol corruption errors and are much harder to protect against. We show how to hide the complications of synchronization errors in many applications by introducing very general channel simulations which efficiently transform an insertion-deletion channel into a regular symbol corruption channel with an error rate larger by a constant factor and a slightly smaller alphabet. We utilize and generalize synchronization string based methods which were recently introduced as a tool to design essentially optimal error correcting codes for insertion-deletion channels. Our channel simulations depend on the fact that, at the cost of increasing the error rate by a constant factor, synchronization strings can be decoded in a streaming manner while preserving linear running time. Interestingly, we provide a lower bound showing that this constant factor cannot be improved to 1+epsilon, in contrast to what is achievable for error correcting codes. Our channel simulations drastically and cleanly generalize the applicability of synchronization strings. We provide new interactive coding schemes which simulate any interactive two-party protocol over an insertion-deletion channel. Our results improve, with respect to both computational and communication complexity, over the interactive coding schemes of Braverman et al. [TransInf '17] and Sherstov and Wu [FOCS '17], which achieve only a small constant rate and require exponential-time computations. We provide the first computationally efficient interactive coding schemes for synchronization errors, the first coding scheme with a rate approaching one for small noise rates, and also the first coding scheme that works over arbitrarily small alphabet sizes. We also show tight connections between synchronization strings and edit-distance tree codes which allow us to transfer results from tree codes directly to edit-distance tree codes. Finally, using our channel simulations, we provide an explicit low-rate binary insertion-deletion code that improves over the state-of-the-art codes by Guruswami and Wang [TransInf '17] in terms of rate-distance trade-off.
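
The toy Python channel below illustrates the error model only (parameters and alphabet are made up; this is not the paper's channel simulation): symbols are randomly deleted and spurious symbols inserted, producing exactly the desynchronization that the channel simulations hide.

import random

def insdel_channel(symbols, p_del, p_ins, alphabet, rng):
    """Toy insertion-deletion channel: each transmitted symbol is deleted with
    probability p_del, and before each position a random symbol is inserted
    with probability p_ins."""
    received = []
    for s in symbols:
        if rng.random() < p_ins:
            received.append(rng.choice(alphabet))   # spurious inserted symbol
        if rng.random() >= p_del:
            received.append(s)                      # otherwise s is delivered
    return received

rng = random.Random(1)
sent = list("synchronization")
recv = insdel_channel(sent, p_del=0.1, p_ins=0.1, alphabet="abcdefghijklmnopqrstuvwxyz", rng=rng)
print("".join(recv))   # similar content, but positions no longer line up with what was sent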

Cite as

Bernhard Haeupler, Amirbehshad Shahrasbi, and Ellen Vitercik. Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2018.75,
  author =	{Haeupler, Bernhard and Shahrasbi, Amirbehshad and Vitercik, Ellen},
  title =	{{Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.75},
  URN =		{urn:nbn:de:0030-drops-90794},
  doi =		{10.4230/LIPIcs.ICALP.2018.75},
  annote =	{Keywords: Synchronization Strings, Channel Simulation, Coding for Interactive Communication}
}
Document
Synchronization Strings: List Decoding for Insertions and Deletions

Authors: Bernhard Haeupler, Amirbehshad Shahrasbi, and Madhu Sudan

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We study codes that are list-decodable under insertions and deletions ("insdel codes"). Specifically, we consider the setting where, given a codeword x of length n over some finite alphabet Sigma of size q, delta * n codeword symbols may be adversarially deleted and gamma * n symbols may be adversarially inserted to yield a corrupted word w. A code is said to be list-decodable if there is an (efficient) algorithm that, given w, reports a small list of codewords that includes the original codeword x. Given delta and gamma, we study the rate R for which there exist a constant q and a list size L such that there exist codes of rate R correcting a delta-fraction of deletions and a gamma-fraction of insertions while reporting lists of size at most L. Using the concept of synchronization strings, introduced by the first two authors [Proc. STOC 2017], we show some surprising results. We show that for every 0 <= delta < 1, every 0 <= gamma < infty and every epsilon > 0 there exist codes of rate 1 - delta - epsilon with constant alphabet size (so q = O_{delta,gamma,epsilon}(1)) and sub-logarithmic list sizes. Furthermore, our codes are accompanied by efficient (polynomial time) decoding algorithms. We stress that the fraction of insertions can be arbitrarily large (more than 100%), and the rate is independent of this parameter. We also prove several tight bounds on the parameters of list-decodable insdel codes. In particular, we show that the alphabet size of insdel codes needs to be exponentially large in epsilon^{-1}, where epsilon is the gap to capacity above. Our result even applies to settings where the unique-decoding capacity equals the list-decoding capacity, and when it does, it shows that the alphabet size needs to be exponentially large in the gap to capacity. This is in sharp contrast to the Hamming error model, where an alphabet size polynomial in epsilon^{-1} suffices for unique decoding. This lower bound also shows that the exponential dependence on the alphabet size in previous works that constructed insdel codes is actually necessary! Our result sheds light on the remarkable asymmetry between the impact of insertions and deletions from the point of view of error-correction: whereas deletions cost in the rate of the code, insertion costs are borne by the adversary and not the code! Our results also highlight the dominance of the model of insertions and deletions over the Hamming model: a Hamming error is equal to one insertion and one deletion (at the same location). Thus the effect of a delta-fraction of Hamming errors can be simulated by a delta-fraction of deletions and a delta-fraction of insertions - but insdel codes can deal with many more insertions without loss in rate (though at the price of a higher alphabet size).
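
A worked numeric instance of the positive result as stated above (parameter values are illustrative): the achievable rate 1 - delta - epsilon depends only on the deletion fraction and the slack, not on the insertion fraction.

def achievable_rate(delta, gamma, eps):
    """Rate guaranteed by the result stated above: 1 - delta - eps over a constant-size
    alphabet, independent of the insertion fraction gamma (which may exceed 1)."""
    assert 0 <= delta < 1 and gamma >= 0 and eps > 0
    return 1.0 - delta - eps

print(achievable_rate(delta=0.30, gamma=2.00, eps=0.05))   # ~0.65, despite 200% insertions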

Cite as

Bernhard Haeupler, Amirbehshad Shahrasbi, and Madhu Sudan. Synchronization Strings: List Decoding for Insertions and Deletions. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2018.76,
  author =	{Haeupler, Bernhard and Shahrasbi, Amirbehshad and Sudan, Madhu},
  title =	{{Synchronization Strings: List Decoding for Insertions and Deletions}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.76},
  URN =		{urn:nbn:de:0030-drops-90807},
  doi =		{10.4230/LIPIcs.ICALP.2018.76},
  annote =	{Keywords: List Decoding, Insertions and Deletions, Synchronization Strings}
}
Document
Algorithms for Noisy Broadcast with Erasures

Authors: Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
The noisy broadcast model was first studied by [Gallager, 1988] where an n-character input is distributed among n processors, so that each processor receives one input bit. Computation proceeds in rounds, where in each round each processor broadcasts a single character, and each reception is corrupted independently at random with some probability p. [Gallager, 1988] gave an algorithm for all processors to learn the input in O(log log n) rounds with high probability. Later, a matching lower bound of Omega(log log n) was given by [Goyal et al., 2008]. We study a relaxed version of this model where each reception is erased and replaced with a `?' independently with probability p, so the processors have knowledge of whether a bit has been corrupted. In this relaxed model, we break past the lower bound of [Goyal et al., 2008] and obtain an O(log^* n)-round algorithm for all processors to learn the input with high probability. We also show an O(1)-round algorithm for the same problem when the alphabet size is Omega(poly(n)).
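
The sketch below simulates only the relaxed communication model, one round of broadcasts in which every reception is independently erased and shown as '?'; it is not the O(log* n) protocol.

import random

def broadcast_round(bits, p_erase, rng):
    """One round: every processor broadcasts its bit; receiver j's copy of sender i's
    bit is independently erased (shown as '?') with probability p_erase."""
    n = len(bits)
    return [['?' if rng.random() < p_erase else bits[i] for i in range(n)]
            for _ in range(n)]            # receptions[j][i] = what processor j hears from i

rng = random.Random(0)
bits = [rng.randint(0, 1) for _ in range(8)]
receptions = broadcast_round(bits, p_erase=0.3, rng=rng)
print(bits)
print(receptions[0])   # processor 0's partially erased view after one round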

Cite as

Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty. Algorithms for Noisy Broadcast with Erasures. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 153:1-153:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{grossman_et_al:LIPIcs.ICALP.2018.153,
  author =	{Grossman, Ofer and Haeupler, Bernhard and Mohanty, Sidhanth},
  title =	{{Algorithms for Noisy Broadcast with Erasures}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{153:1--153:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.153},
  URN =		{urn:nbn:de:0030-drops-91576},
  doi =		{10.4230/LIPIcs.ICALP.2018.153},
  annote =	{Keywords: noisy broadcast, error correction, erasures, distributed computing with noise}
}
Document
Making Asynchronous Distributed Computations Robust to Channel Noise

Authors: Keren Censor-Hillel, Ran Gelles, and Bernhard Haeupler

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We consider the problem of making distributed computations robust to noise, in particular to worst-case (adversarial) corruptions of messages. We give a general distributed interactive coding scheme which simulates any asynchronous distributed protocol while tolerating a maximal corruption level of a Theta(1/n)-fraction of all messages. Our noise tolerance is optimal and is obtained with only a moderate overhead in the number of messages. Our result is the first fully distributed interactive coding scheme in which the topology of the communication network is not known in advance. Prior work required either a coordinating node to be connected to all other nodes in the network or assumed a synchronous network in which all nodes already know the complete topology of the network. Handling this more realistic setting of an unknown topology leads to intriguing distributed problems, in which nodes try to learn sufficient information about the network topology in order to perform efficient coding and routing operations for coping with the noise. What makes these problems hard is that these topology exploration computations themselves must already be robust to noise.

Cite as

Keren Censor-Hillel, Ran Gelles, and Bernhard Haeupler. Making Asynchronous Distributed Computations Robust to Channel Noise. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{censorhillel_et_al:LIPIcs.ITCS.2018.50,
  author =	{Censor-Hillel, Keren and Gelles, Ran and Haeupler, Bernhard},
  title =	{{Making Asynchronous Distributed Computations Robust to Channel Noise}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.50},
  URN =		{urn:nbn:de:0030-drops-83184},
  doi =		{10.4230/LIPIcs.ITCS.2018.50},
  annote =	{Keywords: Distributed Computation, Coding for Interactive Communication, Noise-Resilient Computation, Coding Theory}
}
Document
Communication with Partial Noiseless Feedback

Authors: Bernhard Haeupler, Pritish Kamath, and Ameya Velingker

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves sending a sequence of bits over a channel while receiving feedback bits from Bob for a delta fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of 0 <= delta <= 1, such that Alice can send a message to Bob via some protocol with a delta fraction of noiseless feedback. The case delta = 1 corresponds to full feedback, in which the result of Berlekamp ['64] implies that the maximum tolerable error rate is 1/3, while the case delta = 0 corresponds to no feedback, in which the maximum tolerable error rate is 1/4, achievable by use of a binary error-correcting code. In this work, we show that for any delta in (0,1] and gamma in [0, 1/3), there exists a randomized communication scheme with noiseless delta-feedback, such that the probability of miscommunication is low, as long as no more than a gamma fraction of the rounds are corrupted. Moreover, we show that for any delta in (0, 1] and gamma < f(delta), there exists a deterministic communication scheme with noiseless delta-feedback that always decodes correctly as long as no more than a gamma fraction of rounds are corrupted. Here f is a monotonically increasing, piecewise linear, continuous function with f(0) = 1/4 and f(1) = 1/3. Also, the rate of communication in both cases is constant (dependent on delta and gamma but independent of the input length).
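
The two endpoint constants quoted above can be pinned down in a few lines; the interpolating function f(delta) for partial feedback is the paper's contribution and is deliberately not reproduced here.

def max_tolerable_error_rate(delta):
    """Endpoint values stated above; the interpolating f(delta) for 0 < delta < 1
    is given in the paper and not reproduced here."""
    if delta == 0.0:
        return 1.0 / 4.0      # no feedback: achievable with a binary error-correcting code
    if delta == 1.0:
        return 1.0 / 3.0      # full noiseless feedback (Berlekamp '64)
    raise NotImplementedError("f(delta) for partial feedback is defined in the paper")

print(max_tolerable_error_rate(0.0), max_tolerable_error_rate(1.0))   # 0.25 0.3333...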

Cite as

Bernhard Haeupler, Pritish Kamath, and Ameya Velingker. Communication with Partial Noiseless Feedback. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 881-897, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.APPROX-RANDOM.2015.881,
  author =	{Haeupler, Bernhard and Kamath, Pritish and Velingker, Ameya},
  title =	{{Communication with Partial Noiseless Feedback}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{881--897},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.881},
  URN =		{urn:nbn:de:0030-drops-53426},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.881},
  annote =	{Keywords: Communication with feedback, Interactive communication, Coding theory}
}